Results 1 - 17 of 17
1.
bioRxiv ; 2023 Jan 18.
Article in English | MEDLINE | ID: mdl-36711687

ABSTRACT

Human cortical responses to natural sounds, measured with fMRI, can be approximated as the weighted sum of a small number of canonical response patterns (components), each having interpretable functional and anatomical properties. Here, we asked whether this organization is preserved in cases where only one temporal lobe is available due to early brain damage by investigating a unique family: one sibling born without a left temporal lobe, another without a right temporal lobe, and a third anatomically neurotypical. We analyzed fMRI responses to diverse natural sounds within the intact hemispheres of these individuals and compared them to 12 neurotypical participants. All siblings manifested the neurotypical auditory responses in their intact hemispheres. These results suggest that the development of the auditory cortex in each hemisphere does not depend on the existence of the other hemisphere, highlighting the redundancy and equipotentiality of the bilateral auditory system.

2.
Adv Neural Inf Process Syst ; 36: 638-654, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38434255

ABSTRACT

Modern language models excel at integrating across long temporal scales needed to encode linguistic meaning and show non-trivial similarities to biological neural systems. Prior work suggests that human brain responses to language exhibit hierarchically organized "integration windows" that substantially constrain the overall influence of an input token (e.g., a word) on the neural response. However, little prior work has attempted to use integration windows to characterize computations in large language models (LLMs). We developed a simple word-swap procedure for estimating integration windows from black-box language models that does not depend on access to gradients or knowledge of the model architecture (e.g., attention weights). Using this method, we show that trained LLMs exhibit stereotyped integration windows that are well-fit by a convex combination of an exponential and a power-law function, with a partial transition from exponential to power-law dynamics across network layers. We then introduce a metric for quantifying the extent to which these integration windows vary with structural boundaries (e.g., sentence boundaries), and using this metric, we show that integration windows become increasingly yoked to structure at later network layers. None of these findings were observed in an untrained model, which as expected integrated uniformly across its input. These results suggest that LLMs learn to integrate information in natural language using a stereotyped pattern: integrating across position-yoked, exponential windows at early layers, followed by structure-yoked, power-law windows at later layers. The methods we describe in this paper provide a general-purpose toolkit for understanding temporal integration in language models, facilitating cross-disciplinary research at the intersection of biological and artificial intelligence.
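The word-swap procedure described above can be illustrated with a toy stand-in for the black-box model. Everything below (the exponential-decay "model", vocabulary size, sequence length) is an illustrative assumption, not the paper's setup; a real experiment would query an actual LLM's representations.

```python
import numpy as np

rng = np.random.default_rng(0)

vocab, dim, seq_len = 50, 16, 30
embedding = rng.normal(size=(vocab, dim))

def toy_model(tokens):
    """Stand-in for a black-box LM: the 'representation' at the final
    position is an exponentially weighted sum of token embeddings
    (recent tokens weigh more)."""
    emb = np.stack([embedding[t] for t in tokens])
    w = np.exp(-0.3 * np.arange(len(tokens))[::-1])
    return (w[:, None] * emb).sum(axis=0)

base = rng.integers(0, vocab, size=seq_len)
base_rep = toy_model(base)

# Swap the token at each distance d from the end and measure how much
# the final-position representation changes.
influence = []
for d in range(1, seq_len):
    swapped = base.copy()
    swapped[seq_len - 1 - d] = (swapped[seq_len - 1 - d] + 1) % vocab
    influence.append(np.linalg.norm(toy_model(swapped) - base_rep))

# For this exponential toy model, influence decays with distance.
print(influence[0] > influence[10] > influence[25])
```

Fitting a curve (e.g., a convex combination of exponential and power-law decay, as in the paper) to the `influence` profile would then yield the integration-window estimate.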

4.
Nat Hum Behav ; 6(3): 455-469, 2022 03.
Article in English | MEDLINE | ID: mdl-35145280

ABSTRACT

To derive meaning from sound, the brain must integrate information across many timescales. What computations underlie multiscale integration in human auditory cortex? Evidence suggests that auditory cortex analyses sound using both generic acoustic representations (for example, spectrotemporal modulation tuning) and category-specific computations, but the timescales over which these putatively distinct computations integrate remain unclear. To answer this question, we developed a general method to estimate sensory integration windows-the time window when stimuli alter the neural response-and applied our method to intracranial recordings from neurosurgical patients. We show that human auditory cortex integrates hierarchically across diverse timescales spanning from ~50 to 400 ms. Moreover, we find that neural populations with short and long integration windows exhibit distinct functional properties: short-integration electrodes (less than ~200 ms) show prominent spectrotemporal modulation selectivity, while long-integration electrodes (greater than ~200 ms) show prominent category selectivity. These findings reveal how multiscale integration organizes auditory computation in the human brain.
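The core idea of estimating an integration window, finding the time window within which stimuli alter the response, can be caricatured by presenting a shared segment after two different contexts and noting when the responses converge. The causal moving-average "neuron" and all sizes below are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(3)

def response(stim, window):
    """Toy neural response: causal moving average over the last `window`
    samples, standing in for a neuron with a fixed integration window."""
    kernel = np.ones(window) / window
    return np.convolve(stim, kernel, mode="full")[:len(stim)]

true_window = 20
seg = rng.normal(size=60)                 # segment shared across contexts
ctx_a, ctx_b = rng.normal(size=(2, 100))  # two different preceding contexts

resp_a = response(np.concatenate([ctx_a, seg]), true_window)
resp_b = response(np.concatenate([ctx_b, seg]), true_window)

# Within the shared segment, the two responses differ only while the
# window still overlaps the differing context; the first time point at
# which they agree marks the window's trailing edge.
seg_onset = len(ctx_a)
diverge = ~np.isclose(resp_a, resp_b)
est_window = np.argmax(~diverge[seg_onset:]) + 1
print(est_window)  # recovers the 20-sample integration window
```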


Subjects
Auditory Cortex, Acoustic Stimulation/methods, Auditory Perception, Brain, Brain Mapping/methods, Humans
5.
Curr Biol ; 32(7): 1470-1484.e12, 2022 04 11.
Article in English | MEDLINE | ID: mdl-35196507

ABSTRACT

How is music represented in the brain? While neuroimaging has revealed some spatial segregation between responses to music versus other sounds, little is known about the neural code for music itself. To address this question, we developed a method to infer canonical response components of human auditory cortex using intracranial responses to natural sounds, and further used the superior coverage of fMRI to map their spatial distribution. The inferred components replicated many prior findings, including distinct neural selectivity for speech and music, but also revealed a novel component that responded nearly exclusively to music with singing. Song selectivity was not explainable by standard acoustic features, was located near speech- and music-selective responses, and was also evident in individual electrodes. These results suggest that representations of music are fractionated into subpopulations selective for different types of music, one of which is specialized for the analysis of song.


Subjects
Auditory Cortex, Music, Speech Perception, Acoustic Stimulation/methods, Auditory Cortex/physiology, Auditory Perception/physiology, Brain Mapping/methods, Humans, Speech/physiology, Speech Perception/physiology
6.
Elife ; 10, 2021 11 18.
Article in English | MEDLINE | ID: mdl-34792467

ABSTRACT

Little is known about how neural representations of natural sounds differ across species. For example, speech and music play a unique role in human hearing, yet it is unclear how auditory representations of speech and music differ between humans and other animals. Using functional ultrasound imaging, we measured responses in ferrets to a set of natural and spectrotemporally matched synthetic sounds previously tested in humans. Ferrets showed similar lower-level frequency and modulation tuning to that observed in humans. But while humans showed substantially larger responses to natural vs. synthetic speech and music in non-primary regions, ferret responses to natural and synthetic sounds were closely matched throughout primary and non-primary auditory cortex, even when tested with ferret vocalizations. This finding reveals that auditory representations in humans and ferrets diverge sharply at late stages of cortical processing, potentially driven by higher-order processing demands in speech and music.


Subjects
Auditory Cortex/physiology, Auditory Perception/physiology, Ferrets/physiology, Sound, Acoustic Stimulation, Animals, Humans
7.
Cognition ; 214: 104627, 2021 09.
Article in English | MEDLINE | ID: mdl-34044231

ABSTRACT

Sound is caused by physical events in the world. Do humans infer these causes when recognizing sound sources? We tested whether the recognition of common environmental sounds depends on the inference of a basic physical variable - the source intensity (i.e., the power that produces a sound). A source's intensity can be inferred from the intensity it produces at the ear and its distance, which is normally conveyed by reverberation. Listeners could thus use intensity at the ear and reverberation to constrain recognition by inferring the underlying source intensity. Alternatively, listeners might separate these acoustic cues from their representation of a sound's identity in the interest of invariant recognition. We compared these two hypotheses by measuring recognition accuracy for sounds with typically low or high source intensity (e.g., pepper grinders vs. trucks) that were presented across a range of intensities at the ear or with reverberation cues to distance. The recognition of low-intensity sources (e.g., pepper grinders) was impaired by high presentation intensities or reverberation that conveyed distance, either of which imply high source intensity. Neither effect occurred for high-intensity sources. The results suggest that listeners implicitly use the intensity at the ear along with distance cues to infer a source's power and constrain its identity. The recognition of real-world sounds thus appears to depend upon the inference of their physical generative parameters, even generative parameters whose cues might otherwise be separated from the representation of a sound's identity.


Subjects
Auditory Perception, Sound Localization, Acoustic Stimulation, Cues, Humans, Sound
8.
J Neurophysiol ; 125(6): 2237-2263, 2021 06 01.
Article in English | MEDLINE | ID: mdl-33596723

ABSTRACT

Recent work has shown that human auditory cortex contains neural populations anterior and posterior to primary auditory cortex that respond selectively to music. However, it is unknown how this selectivity for music arises. To test whether musical training is necessary, we measured fMRI responses to 192 natural sounds in 10 people with almost no musical training. When voxel responses were decomposed into underlying components, this group exhibited a music-selective component that was very similar in response profile and anatomical distribution to that previously seen in individuals with moderate musical training. We also found that musical genres that were less familiar to our participants (e.g., Balinese gamelan) produced strong responses within the music component, as did drum clips with rhythm but little melody, suggesting that these neural populations are broadly responsive to music as a whole. Our findings demonstrate that the signature properties of neural music selectivity do not require musical training to develop, showing that the music-selective neural populations are a fundamental and widespread property of the human brain.NEW & NOTEWORTHY We show that music-selective neural populations are clearly present in people without musical training, demonstrating that they are a fundamental and widespread property of the human brain. Additionally, we show music-selective neural populations respond strongly to music from unfamiliar genres as well as music with rhythm but little pitch information, suggesting that they are broadly responsive to music as a whole.


Subjects
Auditory Cortex/physiology, Auditory Perception/physiology, Brain Mapping, Music, Psychological Practice, Adult, Brain Mapping/methods, Female, Humans, Magnetic Resonance Imaging, Male, Young Adult
9.
Front Neurosci ; 13: 1165, 2019.
Article in English | MEDLINE | ID: mdl-31736698

ABSTRACT

Machine learning classification techniques are frequently applied to structural and resting-state fMRI data to identify brain-based biomarkers for developmental disorders. However, task-related fMRI has rarely been used as a diagnostic tool. Here, we used structural MRI, resting-state connectivity and task-based fMRI data to detect congenital amusia, a pitch-specific developmental disorder. All approaches discriminated amusics from controls in meaningful brain networks at similar levels of accuracy. Interestingly, the classifier outcome was specific to deficit-related neural circuits, as the group classification failed for fMRI data acquired during a verbal task for which amusics were unimpaired. Most importantly, classifier outputs of task-related fMRI data predicted individual behavioral performance on an independent pitch-based task, while this relationship was not observed for structural or resting-state data. These results suggest that task-related imaging data can potentially be used as a powerful diagnostic tool to identify developmental disorders as they allow for the prediction of symptom severity.

10.
Nat Neurosci ; 22(7): 1057-1060, 2019 07.
Article in English | MEDLINE | ID: mdl-31182868

ABSTRACT

We report a difference between humans and macaque monkeys in the functional organization of cortical regions implicated in pitch perception. Humans but not macaques showed regions with a strong preference for harmonic sounds compared to noise, measured with both synthetic tones and macaque vocalizations. In contrast, frequency-selective tonotopic maps were similar between the two species. This species difference may be driven by the unique demands of speech and music perception in humans.


Subjects
Auditory Cortex/physiology, Pitch Perception/physiology, Speech Perception/physiology, Acoustic Stimulation, Adult, Animals, Auditory Cortex/anatomy & histology, Brain Mapping, Female, Humans, Macaca mulatta, Magnetic Resonance Imaging, Male, Music, Species Specificity, Animal Vocalization
11.
PLoS Biol ; 16(12): e2005127, 2018 12.
Article in English | MEDLINE | ID: mdl-30507943

ABSTRACT

A central goal of sensory neuroscience is to construct models that can explain neural responses to natural stimuli. As a consequence, sensory models are often tested by comparing neural responses to natural stimuli with model responses to those stimuli. One challenge is that distinct model features are often correlated across natural stimuli, and thus model features can predict neural responses even if they do not in fact drive them. Here, we propose a simple alternative for testing a sensory model: we synthesize a stimulus that yields the same model response as each of a set of natural stimuli, and test whether the natural and "model-matched" stimuli elicit the same neural responses. We used this approach to test whether a common model of auditory cortex-in which spectrogram-like peripheral input is processed by linear spectrotemporal filters-can explain fMRI responses in humans to natural sounds. Prior studies have shown that this model has good predictive power throughout auditory cortex, but this finding could reflect feature correlations in natural stimuli. We observed that fMRI responses to natural and model-matched stimuli were nearly equivalent in primary auditory cortex (PAC) but that nonprimary regions, including those selective for music or speech, showed highly divergent responses to the two sound sets. This dissociation between primary and nonprimary regions was less clear from model predictions due to the influence of feature correlations across natural stimuli. Our results provide a signature of hierarchical organization in human auditory cortex, and suggest that nonprimary regions compute higher-order stimulus properties that are not well captured by traditional models. Our methodology enables stronger tests of sensory models and could be broadly applied in other domains.
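The model-matching logic can be sketched with a purely linear filter bank standing in for the spectrotemporal model (the filter bank and stimulus dimensions below are illustrative assumptions): any component in the filters' null space is invisible to the model, so a matched stimulus can reproduce the model response exactly while differing from the natural stimulus.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 'model': a bank of linear filters applied to a stimulus vector.
# (The paper's model filters a cochleagram; this only shows the logic.)
n_filters, n_samples = 20, 100
filters = rng.normal(size=(n_filters, n_samples))

natural = rng.normal(size=n_samples)  # stands in for a natural sound
target = filters @ natural            # model response to match

# Synthesize a model-matched stimulus: the minimum-norm solution has the
# same filter outputs, plus a random component from the filters' null
# space, which the model cannot 'see'.
matched = np.linalg.lstsq(filters, target, rcond=None)[0]
null_basis = np.linalg.svd(filters)[2][n_filters:]  # null-space rows of V^T
matched = matched + null_basis.T @ rng.normal(size=n_samples - n_filters)

print(np.allclose(filters @ matched, target))   # identical model response
print(np.linalg.norm(matched - natural) > 1.0)  # but a different stimulus
```

If neural responses to `natural` and `matched` then differ, the model is missing response-relevant structure, which is the paper's diagnostic for nonprimary auditory cortex.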


Subjects
Auditory Cortex/physiology, Auditory Perception/physiology, Acoustic Stimulation/methods, Adult, Algorithms, Female, Humans, Linear Models, Magnetic Resonance Imaging/methods, Male, Neurological Models, Music, Sound, Speech/physiology
12.
Neuron ; 98(3): 630-644.e16, 2018 05 02.
Article in English | MEDLINE | ID: mdl-29681533

ABSTRACT

A core goal of auditory neuroscience is to build quantitative models that predict cortical responses to natural sounds. Reasoning that a complete model of auditory cortex must solve ecologically relevant tasks, we optimized hierarchical neural networks for speech and music recognition. The best-performing network contained separate music and speech pathways following early shared processing, potentially replicating human cortical organization. The network performed both tasks as well as humans and exhibited human-like errors despite not being optimized to do so, suggesting common constraints on network and human performance. The network predicted fMRI voxel responses substantially better than traditional spectrotemporal filter models throughout auditory cortex. It also provided a quantitative signature of cortical representational hierarchy-primary and non-primary responses were best predicted by intermediate and late network layers, respectively. The results suggest that task optimization provides a powerful set of tools for modeling sensory systems.


Subjects
Acoustic Stimulation/methods, Auditory Cortex/diagnostic imaging, Auditory Cortex/physiology, Magnetic Resonance Imaging/methods, Nerve Net/diagnostic imaging, Nerve Net/physiology, Psychomotor Performance/physiology, Adolescent, Adult, Aged, Female, Forecasting, Humans, Male, Middle Aged, Young Adult
13.
J Neurosci ; 36(10): 2986-94, 2016 Mar 09.
Article in English | MEDLINE | ID: mdl-26961952

ABSTRACT

Congenital amusia is a lifelong deficit in music perception thought to reflect an underlying impairment in the perception and memory of pitch. The neural basis of amusic impairments is actively debated. Some prior studies have suggested that amusia stems from impaired connectivity between auditory and frontal cortex. However, it remains possible that impairments in pitch coding within auditory cortex also contribute to the disorder, in part because prior studies have not measured responses from the cortical regions most implicated in pitch perception in normal individuals. We addressed this question by measuring fMRI responses in 11 subjects with amusia and 11 age- and education-matched controls to a stimulus contrast that reliably identifies pitch-responsive regions in normal individuals: harmonic tones versus frequency-matched noise. Our findings demonstrate that amusic individuals with a substantial pitch perception deficit exhibit clusters of pitch-responsive voxels that are comparable in extent, selectivity, and anatomical location to those of control participants. We discuss possible explanations for why amusics might be impaired at perceiving pitch relations despite exhibiting normal fMRI responses to pitch in their auditory cortex: (1) individual neurons within the pitch-responsive region might exhibit abnormal tuning or temporal coding not detectable with fMRI, (2) anatomical tracts that link pitch-responsive regions to other brain areas (e.g., frontal cortex) might be altered, and (3) cortical regions outside of pitch-responsive cortex might be abnormal. The ability to identify pitch-responsive regions in individual amusic subjects will make it possible to ask more precise questions about their role in amusia in future work.


Subjects
Auditory Perceptual Disorders/complications, Auditory Perceptual Disorders/pathology, Cerebral Cortex/physiopathology, Pitch Discrimination/physiology, Acoustic Stimulation, Adolescent, Adult, Case-Control Studies, Cerebral Cortex/blood supply, Female, Humans, Computer-Assisted Image Processing, Magnetic Resonance Imaging, Male, Middle Aged, Oxygen/blood, Regression Analysis, Young Adult
14.
Neuroimage ; 129: 401-413, 2016 Apr 01.
Article in English | MEDLINE | ID: mdl-26827809

ABSTRACT

Nonlinearities in the cochlea can introduce audio frequencies that are not present in the sound signal entering the ear. Known as distortion products (DPs), these added frequencies complicate the interpretation of auditory experiments. Sound production systems also introduce distortion via nonlinearities, a particular concern for fMRI research because the Sensimetrics earphones widely used for sound presentation are less linear than most high-end audio devices (due to design constraints). Here we describe the acoustic and neural effects of cochlear and earphone distortion in the context of fMRI studies of pitch perception, and discuss how their effects can be minimized with appropriate stimuli and masking noise. The amplitude of cochlear and Sensimetrics earphone DPs were measured for a large collection of harmonic stimuli to assess effects of level, frequency, and waveform amplitude. Cochlear DP amplitudes were highly sensitive to the absolute frequency of the DP, and were most prominent at frequencies below 300 Hz. Cochlear DPs could thus be effectively masked by low-frequency noise, as expected. Earphone DP amplitudes, in contrast, were highly sensitive to both stimulus and DP frequency (due to prominent resonances in the earphone's transfer function), and their levels grew more rapidly with increasing stimulus level than did cochlear DP amplitudes. As a result, earphone DP amplitudes often exceeded those of cochlear DPs. Using fMRI, we found that earphone DPs had a substantial effect on the response of pitch-sensitive cortical regions. In contrast, cochlear DPs had a small effect on cortical fMRI responses that did not reach statistical significance, consistent with their lower amplitudes. Based on these findings, we designed a set of pitch stimuli optimized for identifying pitch-responsive brain regions using fMRI. These stimuli robustly drive pitch-responsive brain regions while producing minimal cochlear and earphone distortion, and will hopefully aid fMRI researchers in avoiding distortion confounds.
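One way to see why cochlear distortion products cluster at low frequencies for harmonic stimuli is to enumerate the classic combination tones for each pair of components: the difference tone f2 - f1 and the cubic difference tone 2*f1 - f2. This enumeration is a textbook simplification, not the paper's measurement procedure.

```python
def distortion_products(freqs):
    """Enumerate difference tones (f2 - f1) and cubic difference tones
    (2*f1 - f2) for all component pairs, keeping positive frequencies."""
    dps = set()
    for f1 in freqs:
        for f2 in freqs:
            if f1 < f2:
                dps.add(f2 - f1)      # quadratic difference tone
                dps.add(2 * f1 - f2)  # cubic difference tone
    return sorted(f for f in dps if f > 0)

# Harmonics 4-8 of a 200 Hz fundamental (a missing-fundamental complex).
harmonics = [n * 200 for n in range(4, 9)]
print(distortion_products(harmonics))
# → [200, 400, 600, 800, 1000, 1200]
```

The DPs land on harmonics of the 200 Hz fundamental, including components well below the stimulus band (200-600 Hz), which is why low-frequency masking noise can cover them.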


Subjects
Acoustic Stimulation, Artifacts, Magnetic Resonance Imaging/methods, Pitch Perception/physiology, Adult, Cochlea/physiology, Female, Humans, Computer-Assisted Image Processing, Male, Young Adult
15.
Neuron ; 88(6): 1281-1296, 2015 Dec 16.
Article in English | MEDLINE | ID: mdl-26687225

ABSTRACT

The organization of human auditory cortex remains unresolved, due in part to the small stimulus sets common to fMRI studies and the overlap of neural populations within voxels. To address these challenges, we measured fMRI responses to 165 natural sounds and inferred canonical response profiles ("components") whose weighted combinations explained voxel responses throughout auditory cortex. This analysis revealed six components, each with interpretable response characteristics despite being unconstrained by prior functional hypotheses. Four components embodied selectivity for particular acoustic features (frequency, spectrotemporal modulation, pitch). Two others exhibited pronounced selectivity for music and speech, respectively, and were not explainable by standard acoustic features. Anatomically, music and speech selectivity concentrated in distinct regions of non-primary auditory cortex. However, music selectivity was weak in raw voxel responses, and its detection required a decomposition method. Voxel decomposition identifies primary dimensions of response variation across natural sounds, revealing distinct cortical pathways for music and speech.
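The decomposition idea above, explaining voxel responses as weighted combinations of a few component response profiles, can be caricatured with a truncated SVD on simulated voxel-by-sound data. The simulation sizes are illustrative; the paper's actual method additionally rotates the recovered subspace to maximize non-Gaussianity of the voxel weights, which plain SVD omits.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data: responses of 500 voxels to 165 sounds, generated as
# weighted sums of 6 underlying component response profiles plus noise.
n_voxels, n_sounds, n_components = 500, 165, 6
true_profiles = rng.normal(size=(n_components, n_sounds))
true_weights = rng.exponential(size=(n_voxels, n_components))
data = true_weights @ true_profiles + 0.1 * rng.normal(size=(n_voxels, n_sounds))

# Simplified decomposition via truncated SVD: recover a 6-dimensional
# response subspace spanning the component profiles.
U, s, Vt = np.linalg.svd(data, full_matrices=False)
weights = U[:, :n_components] * s[:n_components]  # voxel weights
profiles = Vt[:n_components]                      # component response profiles

# The 6-component reconstruction captures most of the response variance.
recon = weights @ profiles
r2 = 1 - np.sum((data - recon) ** 2) / np.sum(data ** 2)
print(round(r2, 3))
```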


Subjects
Acoustic Stimulation/methods, Auditory Cortex/physiology, Auditory Pathways/physiology, Music, Speech/physiology, Adult, Female, Humans, Magnetic Resonance Imaging/methods, Male, Young Adult
16.
J Neurosci ; 33(50): 19451-69, 2013 Dec 11.
Article in English | MEDLINE | ID: mdl-24336712

ABSTRACT

Pitch is a defining perceptual property of many real-world sounds, including music and speech. Classically, theories of pitch perception have differentiated between temporal and spectral cues. These cues are rendered distinct by the frequency resolution of the ear, such that some frequencies produce "resolved" peaks of excitation in the cochlea, whereas others are "unresolved," providing a pitch cue only via their temporal fluctuations. Despite longstanding interest, the neural structures that process pitch, and their relationship to these cues, have remained controversial. Here, using fMRI in humans, we report the following: (1) consistent with previous reports, all subjects exhibited pitch-sensitive cortical regions that responded substantially more to harmonic tones than frequency-matched noise; (2) the response of these regions was mainly driven by spectrally resolved harmonics, although they also exhibited a weak but consistent response to unresolved harmonics relative to noise; (3) the response of pitch-sensitive regions to a parametric manipulation of resolvability tracked psychophysical discrimination thresholds for the same stimuli; and (4) pitch-sensitive regions were localized to specific tonotopic regions of anterior auditory cortex, extending from a low-frequency region of primary auditory cortex into a more anterior and less frequency-selective region of nonprimary auditory cortex. These results demonstrate that cortical pitch responses are located in a stereotyped region of anterior auditory cortex and are predominantly driven by resolved frequency components in a way that mirrors behavior.


Subjects
Acoustic Stimulation/methods, Auditory Cortex/physiology, Pitch Perception/physiology, Adult, Auditory Pathways/physiology, Female, Functional Neuroimaging, Humans, Magnetic Resonance Imaging, Male, Pitch Discrimination/physiology
17.
J Neurophysiol ; 108(12): 3289-300, 2012 Dec.
Article in English | MEDLINE | ID: mdl-23019005

ABSTRACT

Evidence from brain-damaged patients suggests that regions in the temporal lobes, distinct from those engaged in lower-level auditory analysis, process the pitch and rhythmic structure in music. In contrast, neuroimaging studies targeting the representation of music structure have primarily implicated regions in the inferior frontal cortices. Combining individual-subject fMRI analyses with a scrambling method that manipulated musical structure, we provide evidence of brain regions sensitive to musical structure bilaterally in the temporal lobes, thus reconciling the neuroimaging and patient findings. We further show that these regions are sensitive to the scrambling of both pitch and rhythmic structure but are insensitive to high-level linguistic structure. Our results suggest the existence of brain regions with representations of musical structure that are distinct from high-level linguistic representations and lower-level acoustic representations. These regions provide targets for future research investigating possible neural specialization for music or its associated mental processes.


Subjects
Acoustic Stimulation/methods, Auditory Perception/physiology, Music, Temporal Lobe/physiology, Adolescent, Adult, Brain/physiology, Brain Mapping/methods, Female, Humans, Male, Middle Aged, Photic Stimulation/methods, Young Adult